Images don’t lie: Duplicate crowdtesting reports detection with screenshot information
Authors
Abstract
Similar resources
Assisted Detection of Duplicate Bug Reports
Duplicate bug reports, reports that describe problems or enhancements for which there is already a report in a bug repository, consume time of bug triagers and software developers that might be better spent working on reports that describe unique requests. For many open source projects, duplicate reports represent a significant percentage of the repository, numbering in the thou...
Identification of MIR-Flickr Near-duplicate Images - A Benchmark Collection for Near-duplicate Detection
There are many contexts where the automated detection of near-duplicate images is important, for example the detection of copyright infringement or images of child abuse. There are many published methods for the detection of similar and near-duplicate images; however, it is still uncommon for methods to be objectively compared with each other, probably because of a lack of any good framework in ...
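One family of near-duplicate image detection methods compares compact perceptual fingerprints. As a minimal, hypothetical sketch (not the method of the cited benchmark), average hashing marks each pixel as above or below the mean brightness and compares fingerprints by Hamming distance; a real pipeline would first decode and downscale each image, e.g. to an 8x8 grayscale grid.

```python
def average_hash(pixels):
    """Map a small 2-D grid of grayscale values to a binary fingerprint:
    each bit records whether a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits; a small distance suggests a near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 2x2 "images": a slightly re-exposed copy keeps the same fingerprint,
# while mirrored content flips every bit.
img = [[10, 200], [220, 30]]
near = [[12, 198], [210, 35]]
other = [[200, 10], [30, 220]]
```

Because the hash depends only on each pixel's relation to the mean, uniform brightness and contrast changes leave it unchanged, which is exactly the robustness near-duplicate detection needs.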
Duplicate bug reports considered harmful ... really?
In a survey, we found that most developers have experienced duplicate bug reports; however, only a few considered them a serious problem. This contradicts the popular wisdom that bug duplicates are a serious problem for open source projects. In the survey, developers also pointed out that the additional information provided by duplicates helps resolve bugs more quickly. In this paper, we th...
Merging Duplicate Bug Reports by Sentence Clustering
Duplicate bug reports are often unfavorable because they tend to take many man-hours to be identified as duplicates, marked as such, and eventually discarded. During this time, no progress occurs on the program in question; this overhead should be minimized. Considerable research has been carried out to alleviate this problem. Many methods have been proposed for bug report cate...
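Sentence clustering for duplicate reports can be sketched in many ways; below is a minimal, hypothetical single-pass greedy variant (not the cited paper's algorithm), where a report joins the first cluster whose representative shares enough word overlap with it, measured by Jaccard similarity.

```python
def cluster_sentences(sentences, threshold=0.5):
    """Greedy single-pass clustering: each sentence joins the first cluster
    whose first member overlaps with it enough, else starts a new cluster."""
    def sim(a, b):
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb)

    clusters = []
    for s in sentences:
        for c in clusters:
            if sim(s, c[0]) >= threshold:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

# Hypothetical report summaries: the first two describe the same crash.
reports = [
    "app crashes on startup",
    "the app crashes on startup today",
    "button color is wrong",
]
groups = cluster_sentences(reports)
```

A production system would use richer similarity (TF-IDF or embeddings) and revisit cluster assignments, but the greedy pass shows how duplicates collapse into shared clusters.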
Improved Fuzzy Set Information Retrieval Approach on Duplicate Webpage Detection
Similar Web pages are easily found on the Internet. This redundancy of information severely slows down Internet applications such as the crawl module of a search engine, and can waste storage in the indexing procedure. In this paper, we propose a content-based approach for detecting duplicate webpages. The algorithm contains three parts: i) pre-processing, excluding HTML tags and unrelated...
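The pre-processing and comparison steps described above can be sketched with a common shingling technique; this is an illustrative stand-in, not the paper's fuzzy-set method. HTML tags are stripped, the text is lowercased, and pages are compared by the Jaccard overlap of their word k-grams.

```python
import re

def shingles(text, k=3):
    """Pre-process: drop HTML tags, lowercase, then take word k-grams."""
    words = re.sub(r"<[^>]+>", " ", text).lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Overlap of two shingle sets; values near 1.0 indicate near-duplicates."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical pages differing only in markup and one word.
page_a = "<p>cheap flights to paris book now</p>"
page_b = "<div>cheap flights to paris book today</div>"
sim = jaccard(shingles(page_a), shingles(page_b))
```

The two pages share three of five distinct trigrams, so their similarity is 0.6; a crawler would flag pairs above a tuned threshold as duplicates.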
Journal
Journal title: Information and Software Technology
Year: 2019
ISSN: 0950-5849
DOI: 10.1016/j.infsof.2019.03.003